Efficient mixed-order hidden Markov model inference

Authors

  • Ludwig Schwardt
  • Johan A. du Preez
Abstract

Recent studies have shown that high-order hidden Markov models (HMMs) are feasible and useful for spoken language processing. This paper extends the fixed-order versions to ergodic mixed-order HMMs, which allow the modelling of variable-length contexts with significantly fewer parameters. A novel training procedure automatically infers the number of states and the topology of the HMM from the training set, based on information-theoretic criteria. This is done by incorporating only high-order contexts with sufficient support in the data. The mixed-order training algorithm is faster than fixed-order methods, with similar classification performance in language identification tasks.
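The core idea of keeping only high-order contexts with sufficient support can be illustrated with a minimal sketch (not the paper's algorithm): count all symbol contexts up to some maximum order and retain only those that occur often enough in the training data. The function name, threshold, and toy corpus below are illustrative assumptions.

```python
from collections import Counter

def prune_contexts(sequences, max_order=3, min_count=5):
    """Collect symbol contexts up to max_order and keep only those
    seen at least min_count times -- the 'sufficient support' idea."""
    counts = Counter()
    for seq in sequences:
        for t in range(len(seq)):
            for k in range(1, max_order + 1):
                if t - k >= 0:
                    counts[tuple(seq[t - k:t])] += 1
    return {ctx for ctx, c in counts.items() if c >= min_count}

# Toy corpus: frequent contexts survive, rare ones are dropped,
# so different states end up conditioning on contexts of different lengths.
corpus = [list("abababab"), list("ababac")]
kept = prune_contexts(corpus, max_order=2, min_count=3)
```

Contexts that fall below the support threshold back off to shorter histories, which is what gives the mixed-order model fewer parameters than a full fixed-order one.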


Similar articles

Stochastic Variational Inference for the HDP-HMM

We derive a variational inference algorithm for the HDP-HMM based on the two-level stick breaking construction. This construction has previously been applied to the hierarchical Dirichlet processes (HDP) for mixed membership models, allowing for efficient handling of the coupled weight parameters. However, the same algorithm is not directly applicable to HDP-based infinite hidden Markov models ...
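The two-level construction mentioned above can be sketched numerically (a truncated finite approximation, not the paper's inference algorithm): a top-level stick-breaking (GEM) draw produces global weights, and each state's transition row is then drawn centred on those shared weights. The truncation level `K` and concentration values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gem(gamma, K):
    """Truncated stick-breaking (GEM) weights of length K."""
    v = rng.beta(1.0, gamma, size=K)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    w[-1] = 1.0 - w[:-1].sum()  # absorb the truncation remainder
    return w

K = 10
beta = gem(gamma=2.0, K=K)        # top level: global state weights
alpha = 5.0
# Second level: each state's transition distribution is a draw
# centred on beta (finite Dirichlet approximation to DP(alpha, beta)).
pi = rng.dirichlet(alpha * beta, size=K)
```

The coupling that makes inference hard is visible here: every row of `pi` shares the same top-level weights `beta`.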


An Introduction to Hidden Markov Models and Bayesian Networks

We provide a tutorial on learning and inference in hidden Markov models in the context of the recent literature on Bayesian networks. This perspective makes it possible to consider novel generalizations of hidden Markov models with multiple hidden state variables, multiscale representations, and mixed discrete and continuous variables. Although exact inference in these generalizations is usuall...


Efficient high-order hidden Markov modelling

We present two powerful tools which allow efficient training of arbitrary (including mixed and infinite) order hidden Markov models. The method rests on two parts: an algorithm which can convert high-order models to an equivalent first-order representation (ORder rEDucing), and a Fast (order) Incremental Training algorithm. We demonstrate that this method is more flexible, results in significan...
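The order-reducing step described above can be illustrated for the second-order case (a minimal sketch under the standard state-pair construction; not necessarily the exact ORED procedure): a second-order transition tensor becomes an equivalent first-order matrix over composite states `(i, j)`.

```python
import numpy as np

def second_to_first_order(A2):
    """Flatten a second-order transition tensor
    A2[i, j, k] = P(s_t = k | s_{t-2} = i, s_{t-1} = j)
    into an equivalent first-order matrix over composite
    states (i, j); a transition (i, j) -> (j, k) inherits
    the probability A2[i, j, k]."""
    N = A2.shape[0]
    A1 = np.zeros((N * N, N * N))
    for i in range(N):
        for j in range(N):
            for k in range(N):
                A1[i * N + j, j * N + k] = A2[i, j, k]
    return A1

# Demo: rows of the flattened matrix stay stochastic.
rng = np.random.default_rng(0)
A2 = rng.random((3, 3, 3))
A2 /= A2.sum(axis=-1, keepdims=True)
A1 = second_to_first_order(A2)
```

Once in first-order form, standard forward-backward and Viterbi machinery applies unchanged, at the cost of a larger (but sparse) state space.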


Bayesian Inference for Discrete Mixed Graph Models: Normit Networks, Observable Independencies and Infinite Mixtures

Directed mixed graphs are graphical representations that include directed and bidirected edges. Such a class is motivated by dependencies that arise when hidden common causes are marginalized out of a distribution. In previous work, we introduced an efficient Monte Carlo algorithm for sampling from Gaussian mixed graph models. An analogous model for discrete distributions is likely to be doubly...


Bayesian nonparametric hidden semi-Markov models

There is much interest in the Hierarchical Dirichlet Process Hidden Markov Model (HDP-HMM) as a natural Bayesian nonparametric extension of the ubiquitous Hidden Markov Model for learning from sequential and time-series data. However, in many settings the HDP-HMM's strict Markovian constraints are undesirable, particularly if we wish to learn or encode non-geometric state durations. We can exten...
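The contrast with geometric durations can be sketched as follows (an illustrative toy, not the paper's model): in a semi-Markov chain each visited state draws an explicit duration, here shifted Poisson, instead of the geometric dwell time implied by HMM self-transitions. The function name and parameter choices are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_hsmm_states(pi0, A, dur_rate, T):
    """Sample a state path of length T from an explicit-duration
    semi-Markov chain: each visited state emits a Poisson(rate)+1
    duration, then transitions via A (which has no self-loops)."""
    path = []
    s = rng.choice(len(pi0), p=pi0)
    while len(path) < T:
        d = rng.poisson(dur_rate[s]) + 1   # non-geometric duration
        path.extend([s] * d)
        s = rng.choice(len(A), p=A[s])
    return path[:T]

# Two states that alternate, each dwelling ~3 steps on average.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
path = sample_hsmm_states([0.5, 0.5], A, dur_rate=[2.0, 2.0], T=30)
```

Replacing the duration distribution changes the dwell-time profile directly, which is exactly what the HMM's geometric assumption cannot do.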



Publication date: 2000